Results 1 - 3 of 3
1.
Small; e2400096, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38516956

ABSTRACT

Extremely poor solution stability and massive carrier recombination have seriously hindered efficient and stable hydrogen production with III-V semiconductor nanomaterials. In this work, an anodic reconstruction strategy for group III-V active semiconductors is proposed for the first time, yielding a 19-fold photo-gain. Most importantly, the device after anodic reconstruction shows superior stability in a prolonged photoelectrochemical (PEC) test of over 8100 s, during which the final photocurrent density does not decrease but instead increases by 63.15%. Combining experiments with DFT calculations, the anodic reconstruction mechanism is elucidated: through the oxidation of indium clusters and the migration of arsenic atoms, the reconstruction forms p+-GaAs/a-InAsN. The hole concentration of the former increases roughly 10-fold (from 5.64 × 10^18 cm^-3 to 5.95 × 10^19 cm^-3), and the band gap of the latter is reduced to a semi-metallic state, greatly strengthening the driving force of PEC water splitting. This work turns waste into treasure, converting solution instability into higher efficiency.
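As a quick arithmetic check, the reported roughly ten-fold increase in hole concentration follows directly from the two quoted values:

\[
\frac{5.95 \times 10^{19}\ \mathrm{cm^{-3}}}{5.64 \times 10^{18}\ \mathrm{cm^{-3}}} \approx 10.5
\]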

2.
IEEE Trans Image Process; 32: 4036-4045, 2023.
Article in English | MEDLINE | ID: mdl-37440404

ABSTRACT

The transformer, the model of choice for natural language processing, has drawn scant attention from the medical imaging community. Given their ability to exploit long-term dependencies, transformers are promising for helping typical convolutional neural networks learn more contextualized visual representations. However, most recently proposed transformer-based segmentation approaches simply treat transformers as assisting modules that encode global context into convolutional representations. To address this issue, we introduce nnFormer (i.e., not-another transFormer), a 3D transformer for volumetric medical image segmentation. nnFormer not only exploits a combination of interleaved convolution and self-attention operations, but also introduces local and global volume-based self-attention mechanisms to learn volume representations. Moreover, nnFormer uses skip attention to replace the traditional concatenation/summation operations in the skip connections of U-Net-like architectures. Experiments show that nnFormer outperforms previous transformer-based counterparts by large margins on three public datasets. Compared to nnUNet, the most widely recognized convnet-based 3D medical segmentation model, nnFormer produces significantly lower HD95 and is much more computationally efficient. Furthermore, we show that nnFormer and nnUNet are highly complementary in model ensembling. Code and models for nnFormer are available at https://git.io/JSf3i.
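As an illustration of the skip-attention idea described above, the following minimal PyTorch sketch replaces the usual concatenation in a U-Net-like skip connection with cross-attention from decoder queries to encoder (skip) keys/values. The class and parameter names are hypothetical; this is not nnFormer's actual implementation.

# Minimal sketch: decoder features attend to the corresponding encoder (skip)
# features instead of being concatenated with them.
import torch
import torch.nn as nn

class SkipAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels,
                                          num_heads=num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, decoder_feat: torch.Tensor, skip_feat: torch.Tensor) -> torch.Tensor:
        # decoder_feat, skip_feat: (B, C, D, H, W) volumes of matching shape
        b, c, d, h, w = decoder_feat.shape
        q = decoder_feat.flatten(2).transpose(1, 2)   # (B, D*H*W, C) queries
        kv = skip_feat.flatten(2).transpose(1, 2)     # (B, D*H*W, C) keys/values
        out, _ = self.attn(q, kv, kv)                 # cross-attention over voxels
        out = self.norm(out + q)                      # residual connection + norm
        return out.transpose(1, 2).reshape(b, c, d, h, w)

# Usage: fused = SkipAttention(64)(decoder_feat, skip_feat) takes the place of
# torch.cat([decoder_feat, skip_feat], dim=1) in a U-Net-like decoder stage.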

3.
Med Image Anal; 80: 102506, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35696875

ABSTRACT

Training deep segmentation models for medical images often requires a large amount of labeled data. To tackle this issue, semi-supervised segmentation has been employed to produce satisfactory delineations at an affordable labeling cost. However, traditional semi-supervised segmentation methods fail to exploit unpaired multi-modal data, which are widely used in today's clinical routine. In this paper, we address this point by proposing Modality-collAborative Semi-Supervised segmentation (i.e., MASS), which utilizes modality-independent knowledge learned from unpaired CT and MRI scans. To exploit such knowledge, MASS uses cross-modal consistency to regularize deep segmentation models in both the semantic and anatomical spaces, from which it learns intra- and inter-modal correspondences to warp atlas labels for making predictions. To better capture inter-modal correspondence from a feature-alignment perspective, we propose a contrastive similarity loss that regularizes the latent space of both modalities, so as to learn generalized and robust modality-independent representations. Compared to semi-supervised and multi-modal segmentation counterparts, the proposed MASS brings nearly 6% improvement under extremely limited supervision.
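As an illustration of a contrastive similarity loss of the kind described above, the following minimal PyTorch sketch pulls corresponding CT and MRI latent features together and pushes the rest apart. It assumes pseudo-paired feature rows (e.g., obtained through the learned correspondences), which is an illustrative assumption since the scans themselves are unpaired, and it is not the paper's exact loss; the function name and temperature value are hypothetical.

# Minimal sketch: symmetric InfoNCE-style contrastive similarity loss between
# CT and MRI latent vectors; row i of each batch is treated as a positive pair.
import torch
import torch.nn.functional as F

def contrastive_similarity_loss(feat_ct: torch.Tensor,
                                feat_mri: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    # feat_ct, feat_mri: (B, C) latent vectors from the two modality encoders.
    z_ct = F.normalize(feat_ct, dim=1)
    z_mri = F.normalize(feat_mri, dim=1)
    logits = z_ct @ z_mri.t() / temperature           # (B, B) cosine similarities
    targets = torch.arange(z_ct.size(0), device=z_ct.device)
    # Symmetric loss: CT -> MRI and MRI -> CT directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))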


Subjects
Deep Learning; Humans; Magnetic Resonance Imaging; Tomography, X-Ray Computed